Existing approaches for vision-and-language navigation (VLN) are mainly based on cross-modal reasoning over discrete views. However, this scheme may hamper an agent's spatial and numerical reasoning because of incomplete objects within a single view and duplicate observations across views. A potential solution is mapping discrete views into a unified bird's-eye view, which can aggregate partial and duplicate observations. Existing metric maps could achieve this goal, but they suffer from less expressive semantics (e.g., predefined labels) and limited map size, which weakens an agent's language grounding and long-term planning ability. Inspired by the robotics community, we introduce hybrid topo-metric maps into VLN, where a topological map is used for long-term planning and a metric map for short-term reasoning. Beyond mapping with more expressive deep features, we further design a pre-training framework via the hybrid map to learn language-informed map representations, which enhances cross-modal grounding and facilitates the final language-guided navigation goal. Extensive experiments demonstrate the effectiveness of the map-based route for VLN, and the proposed method sets a new state of the art on three VLN benchmarks.
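As a minimal sketch of how such a hybrid map might be organized, the class below pairs a topological graph (for long-term planning) with per-node metric feature grids (for short-term reasoning). The node naming, grid size, feature dimension, and BFS planner are illustrative assumptions, not the authors' implementation:

```python
# A minimal sketch of a hybrid topo-metric map; grid sizes, feature
# dimensions, and the planner are assumptions, not the paper's API.
import numpy as np
from collections import deque


class HybridTopoMetricMap:
    """Topological graph for long-term planning; per-node bird's-eye-view
    metric feature grids for short-term reasoning."""

    def __init__(self, grid_size=21, feat_dim=768):
        self.grid_size = grid_size
        self.feat_dim = feat_dim
        self.nodes = {}        # node_id -> (x, y) world position
        self.edges = {}        # node_id -> set of neighbor node_ids
        self.metric_maps = {}  # node_id -> egocentric BEV feature grid

    def add_node(self, node_id, position):
        self.nodes[node_id] = position
        self.edges.setdefault(node_id, set())
        # Deep features (e.g. from a visual encoder) are aggregated into
        # the BEV grid, rather than predefined semantic labels.
        self.metric_maps[node_id] = np.zeros(
            (self.grid_size, self.grid_size, self.feat_dim), dtype=np.float32)

    def connect(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def fuse_view(self, node_id, cell_indices, view_features):
        """Aggregate partial/duplicate observations from discrete views
        into the unified BEV grid by averaging features per cell."""
        grid = self.metric_maps[node_id]
        for (r, c), feat in zip(cell_indices, view_features):
            grid[r, c] = 0.5 * grid[r, c] + 0.5 * feat

    def shortest_path(self, start, goal):
        """Long-term planning on the topological graph (plain BFS here
        for simplicity; the paper's planner may differ)."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None


m = HybridTopoMetricMap()
m.add_node("n0", (0.0, 0.0)); m.add_node("n1", (2.0, 0.0))
m.connect("n0", "n1")
print(m.shortest_path("n0", "n1"))
```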
Path planning in multi-robot systems refers to calculating a set of actions for each robot that will move it to its goal without conflicting with other robots. Lately, this research topic has received significant attention for its extensive applications, such as airport ground operations, drone swarms, and automated warehouses. Despite these available research results, most existing investigations are concerned with robots moving at a fixed speed, without considering uncertainty. Therefore, in this work, we study the path-planning problem in the multi-robot automated warehouse context, taking into account the time-varying and uncertain movement speeds of robots. Specifically, the path-planning module searches a path with as few conflicts as possible for a single agent by calculating traffic cost based on a normally distributed conflict probability and combining it with the classic A* algorithm. However, this probability-based method cannot eliminate all conflicts, and the uncertainty of speed will constantly cause new ones. As a supplement, we propose two additional modules. The conflict detection and re-planning module periodically selects, according to our designed rules, the agents whose paths need to be re-planned from among those involved in different types of conflicts. Also, at each step, the scheduling module fills up each agent's reserved queue and decides which agent has higher priority when the same element is assigned to two agents simultaneously. Finally, we compare the proposed algorithm with other algorithms from academia and industry, and the results show that the proposed method achieves the best performance.
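The core idea of folding a conflict probability into the search cost can be sketched as a weighted A* on a grid. The penalty weight `lam`, the 4-connected grid, and the `conflict_prob` lookup below are illustrative stand-ins for the paper's conflict model, not its actual code:

```python
# A minimal sketch of A* with a conflict-probability traffic cost,
# assuming a 4-connected grid; `lam` and `conflict_prob` are assumptions.
import heapq


def plan(grid, start, goal, conflict_prob, lam=5.0):
    """grid: 2D list, 0 = free, 1 = blocked.
    conflict_prob: dict mapping cell -> probability of conflicting with
    another robot; folded into the step cost as a traffic penalty."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:
                continue
            # Step cost = movement + traffic penalty, steering the search
            # toward cells with fewer expected conflicts.
            ng = g + 1.0 + lam * conflict_prob.get(nxt, 0.0)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None


grid = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(plan(grid, (0, 0), (2, 2), {(0, 1): 0.8}))  # avoids the risky cell
```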
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAVs) and Unmanned Surface Vehicles (USVs), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, SeaDronesSee Object Detection v2, which extends the previous benchmark with more classes and footage. We provide statistical and qualitative analyses and assess trends in the best-performing methodologies across over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Searching in denied environments is challenging for swarm robots, since no help from GNSS, mapping, data sharing, or central processing is available. However, cooperating using olfactory and auditory cues, as animals do, may be an important way to improve swarm cooperation. In this paper, an olfactory-auditory augmented bug algorithm (OA-BUG) is proposed for a swarm of autonomous robots to explore denied environments. A simulation environment is built to measure the performance of OA-BUG. The coverage of the search task using OA-BUG can reach 96.93%, a maximum improvement of 40.55% compared with a similar algorithm, SGBA. Furthermore, experiments are conducted on real swarm robots to demonstrate the validity of OA-BUG. The results show that OA-BUG can improve the performance of swarm robots in denied environments.
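To make the olfactory-auditory idea concrete, the sketch below combines the two cues into a single heading decision, with a bug-style fallback at obstacles. This is a heavily simplified assumption of how such cues could be fused; the sensing model, wall-following logic, and sound-emission protocol of the real OA-BUG algorithm are not reproduced here:

```python
# A minimal, assumed sketch of olfactory-auditory heading selection;
# the attraction/repulsion convention below is illustrative only.
import math
import random


def choose_heading(position, smell_sources, sound_sources, obstacle_ahead):
    """Combine olfactory and auditory cues into one heading. Odor sources
    attract the robot; teammates' sound repels it to spread the swarm
    (an assumed convention, not the paper's rule)."""
    def bearing(src):
        return math.atan2(src[1] - position[1], src[0] - position[0])

    if obstacle_ahead:
        # Bug-style fallback: turn to follow the obstacle boundary.
        return random.choice((math.pi / 2, -math.pi / 2))

    vx = vy = 0.0
    for src in smell_sources:   # attraction toward odor (target) sources
        a = bearing(src)
        vx, vy = vx + math.cos(a), vy + math.sin(a)
    for src in sound_sources:   # repulsion from teammates' sound
        a = bearing(src)
        vx, vy = vx - 0.5 * math.cos(a), vy - 0.5 * math.sin(a)
    if vx == 0.0 and vy == 0.0:
        return random.uniform(-math.pi, math.pi)  # explore randomly
    return math.atan2(vy, vx)


print(choose_heading((0, 0), [(3, 4)], [(0, -2)], obstacle_ahead=False))
```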
Federated learning (FL) is a machine learning paradigm that allows decentralized clients to learn collaboratively without sharing their private data. However, excessive computation and communication demands pose challenges to current FL frameworks, especially when training large-scale models. To prevent these issues from hindering the deployment of FL systems, we propose a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models, rather than training a large-scale model from scratch. This leads us to a more practical FL problem: how to capture more client-specific information from the pre-trained models and jointly improve each client's ability to exploit these off-the-shelf models. In this work, we design a Federated Prototype-wise Contrastive Learning (FedPCL) approach which shares knowledge across clients through their class prototypes and builds client-specific representations in a prototype-wise contrastive manner. Sharing prototypes rather than learnable model parameters allows each client to fuse representations in a personalized way while keeping the shared knowledge in a compact form for efficient communication. We perform a thorough evaluation of the proposed FedPCL in the lightweight framework, measuring and visualizing its ability to fuse various pre-trained models on popular FL datasets.
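The two ingredients named in the abstract, class prototypes and a prototype-wise contrastive objective, can be sketched as below. The InfoNCE-style formulation, temperature, and averaging scheme are plausible assumptions, not the paper's exact code:

```python
# A minimal sketch of prototype sharing and a prototype-wise contrastive
# loss in the spirit of FedPCL; details are illustrative assumptions.
import torch
import torch.nn.functional as F


def class_prototypes(features, labels, num_classes):
    """Average the fused representations per class to get prototypes,
    which are shared with the server instead of model parameters."""
    protos = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return F.normalize(protos, dim=1)


def proto_contrastive_loss(features, labels, global_protos, tau=0.07):
    """Pull each sample toward its class prototype and push it away from
    other classes' prototypes (an InfoNCE-style objective)."""
    feats = F.normalize(features, dim=1)
    logits = feats @ global_protos.t() / tau  # (batch, num_classes)
    return F.cross_entropy(logits, labels)


# Toy usage: 8 samples of 64-d fused representations, 4 classes.
feats = torch.randn(8, 64)
labels = torch.randint(0, 4, (8,))
protos = class_prototypes(feats, labels, num_classes=4)
print(float(proto_contrastive_loss(feats, labels, protos)))
```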
Mainstream object detectors are commonly composed of two sub-tasks, classification and regression, implemented by two parallel heads. This classic design paradigm inevitably leads to an inconsistent spatial distribution between the classification score and the localization quality (IoU). Therefore, this paper alleviates this misalignment from the perspective of knowledge distillation. First, we observe that a massive teacher achieves a higher proportion of harmonious predictions than a lightweight student. Based on this intriguing observation, a novel Harmony Score (HS) is devised to estimate the consistency of classification and regression qualities. HS models the relationship between the two sub-tasks and is regarded as prior knowledge to promote harmonious predictions in the student. Second, this spatial misalignment will cause a biased selection of regions when distilling features. To alleviate this problem, a novel Task-decoupled Feature Distillation (TFD) is proposed, which flexibly balances the contributions of the classification and regression tasks. Ultimately, HD and TFD constitute the proposed method, named Task-Balanced Distillation (TBD). Extensive experiments demonstrate the considerable potential and generalization of the proposed method. Specifically, when equipped with TBD, RetinaNet with ResNet-50 achieves 41.0 mAP on the COCO benchmark, outperforming the recent FGD and FRS.
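A harmony-style score of this kind must be high only when the classification confidence and the localization quality agree. One plausible instantiation is a geometric mixture of the two, sketched below; the exponent and exact formulation are assumptions, not the paper's definition of HS:

```python
# A minimal, assumed sketch of a harmony-style score combining
# classification confidence and localization quality (IoU).
import torch


def iou(boxes_a, boxes_b):
    """Element-wise IoU between two (N, 4) tensors in (x1, y1, x2, y2)."""
    lt = torch.max(boxes_a[:, :2], boxes_b[:, :2])
    rb = torch.min(boxes_a[:, 2:], boxes_b[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    return inter / (area_a + area_b - inter).clamp(min=1e-6)


def harmony_score(cls_scores, pred_boxes, gt_boxes, alpha=0.5):
    """High only when classification and localization agree: a geometric
    mixture of the class score and the IoU with the matched ground truth."""
    loc_quality = iou(pred_boxes, gt_boxes)
    return cls_scores.pow(alpha) * loc_quality.pow(1.0 - alpha)


cls = torch.tensor([0.9, 0.9])
pred = torch.tensor([[0., 0., 10., 10.], [0., 0., 10., 10.]])
gt = torch.tensor([[0., 0., 10., 10.], [5., 5., 15., 15.]])
print(harmony_score(cls, pred, gt))  # the well-localized box scores higher
```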
Generic event boundary detection (GEBD) is an important yet challenging task in video understanding, which aims at detecting the moments where humans naturally perceive event boundaries. In this paper, we present a local context modeling and global boundary decoding approach for the GEBD task. A local context modeling sub-network is proposed to perceive diverse patterns of generic event boundaries and to generate powerful video representations and reliable boundary confidences. Based on them, a global boundary decoding sub-network is exploited to decode event boundaries from a global view. Our proposed method achieves an F1 score of 85.13% on the Kinetics-GEBD test set, improving the F1 score by more than 22% compared with the baseline method. The code is available at https://github.com/jackytown/gebd_challenge_cvpr2022.
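The two-stage structure can be sketched as a local temporal network that emits per-frame boundary confidences, followed by a global decoding step. The layer sizes, kernel widths, and the simple peak-picking decoder below are assumptions; the paper's sub-networks are more elaborate:

```python
# A minimal sketch of local context modeling + global boundary decoding;
# architecture details and thresholds are illustrative assumptions.
import torch
import torch.nn as nn


class LocalContextNet(nn.Module):
    """1D temporal convolutions over frame features to model local
    context and emit a boundary confidence per frame."""

    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, frame_feats):        # (B, T, feat_dim)
        x = frame_feats.transpose(1, 2)    # (B, feat_dim, T)
        return self.net(x).squeeze(1)      # (B, T) boundary confidences


def decode_boundaries(conf, threshold=0.5):
    """Global decoding reduced to local-maximum selection over the whole
    confidence sequence, above a threshold."""
    conf = conf.squeeze(0)
    return [t for t in range(1, len(conf) - 1)
            if conf[t] > threshold and conf[t] >= conf[t - 1]
            and conf[t] >= conf[t + 1]]


feats = torch.randn(1, 100, 256)           # toy clip: 100 frames
conf = LocalContextNet()(feats)
print(decode_boundaries(conf))
```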
Manufacturing companies typically optimize production steps using complex production planning systems, which often deliver near-optimal solutions. As a downside of delivering near-optimal schedules, planning systems have high computational demands, resulting in hours of computation. Under normal circumstances this is not a problem, provided there is enough buffer time before the schedule has to be executed (e.g., the next night). However, if an unexpected disruption occurs, such as a delayed part delivery or defective manufactured goods, the planned schedule may become invalid and rapid rescheduling becomes necessary. Due to the computational requirements, such immediate rescheduling is not feasible with the existing optimal planners. This paper proposes a novel solution that can effectively and efficiently perform rescheduling under different types of disruptions, starting from an existing schedule. The approach is based on the idea of adhering to the existing schedule as much as possible and adapting it with limited local changes. To this end, an agent-based scheduling mechanism has been devised, in which agents represent materials and production sites and use local optimization techniques and negotiation to generate an adapted (sufficient, but non-optimal) schedule. The approach has been evaluated using real production data from Huawei, showing that valid schedules are produced in a short amount of time. The system has been implemented as a proof of concept and is currently being re-implemented and transferred into a production system based on the Jadex agent platform.
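The repair-with-local-changes idea can be sketched as a greedy right-shift of only the jobs affected by a disruption. The flat job list and per-site overlap resolution below are simplifying assumptions; the paper's agents additionally negotiate and locally optimize:

```python
# A minimal sketch of schedule repair with limited local changes; the
# agents' negotiation protocol is reduced to a greedy right-shift.
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    start: int      # planned start time
    duration: int
    site: str       # production site (one agent per site)


def repair(jobs, delayed_job, delay):
    """Keep the existing schedule where possible: push back only the
    delayed job and its successors on the same site that now overlap."""
    jobs = sorted(jobs, key=lambda j: j.start)
    for job in jobs:
        if job.name == delayed_job:
            job.start += delay
    # Resolve overlaps per site by shifting successors right (locally),
    # leaving all unaffected jobs untouched.
    by_site = {}
    for job in jobs:
        by_site.setdefault(job.site, []).append(job)
    for site_jobs in by_site.values():
        site_jobs.sort(key=lambda j: j.start)
        for prev, nxt in zip(site_jobs, site_jobs[1:]):
            if nxt.start < prev.start + prev.duration:
                nxt.start = prev.start + prev.duration
    return jobs


plan = [Job("cut", 0, 4, "m1"), Job("weld", 4, 3, "m1"), Job("paint", 7, 2, "m1")]
for job in repair(plan, "cut", 2):  # a 2-unit delay on "cut" propagates
    print(job)
```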
TikTok is a popular new social media platform where users express themselves through short video clips. A common form of interaction on the platform is participation in "challenges", which are songs and dances for users to iterate upon. Challenge contagion can be measured through replication reach, i.e., users uploading videos of their participation in a challenge. The uniqueness of the TikTok platform, where both challenge content and user preferences are constantly evolving, requires a combination of challenge and user representations. This paper investigates the social contagion of TikTok challenges by predicting users' participation. We propose a novel deep learning model, DeepChallenger, that learns and combines latent user and challenge representations to perform this user-challenge prediction task. We collect a dataset of over 7,000 videos from 12 trending challenges on the ForYouPage, the app's landing page, and over 10,000 videos from 1,303 users. Extensive experiments are conducted, and the results show that our proposed DeepChallenger (F1 = 0.494) outperforms the baseline (F1 = 0.188) on the prediction task.
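A model that "learns and combines latent user and challenge representations" can be sketched as two embedding tables fused by a small MLP. The embedding sizes and fusion head below are illustrative assumptions, not the paper's architecture:

```python
# A minimal, assumed sketch of a user-challenge participation predictor
# in the spirit of DeepChallenger.
import torch
import torch.nn as nn


class ParticipationPredictor(nn.Module):
    """Learns latent user and challenge representations and combines
    them to score the probability of a user joining a challenge."""

    def __init__(self, num_users, num_challenges, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.chal_emb = nn.Embedding(num_challenges, dim)
        self.head = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, user_ids, challenge_ids):
        u = self.user_emb(user_ids)
        c = self.chal_emb(challenge_ids)
        return self.head(torch.cat([u, c], dim=-1)).squeeze(-1)


# Toy usage with the dataset's scale (1,303 users, 12 challenges).
model = ParticipationPredictor(num_users=1303, num_challenges=12)
prob = model(torch.tensor([0, 7]), torch.tensor([3, 3]))
print(prob)  # participation probabilities for two (user, challenge) pairs
```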
Hair editing is an interesting and challenging problem in computer vision and graphics. Many existing methods require well-drawn sketches or masks as conditional inputs for editing, but these interactions are neither straightforward nor efficient. To free users from tedious interaction processes, this paper proposes a new hair editing interaction mode, which enables manipulating hair attributes individually or jointly based on texts or reference images provided by users. For this purpose, we encode the image and text conditions in a shared embedding space by leveraging the powerful image-text representation capability of the Contrastive Language-Image Pre-training (CLIP) model, and propose a unified hair editing framework. With carefully designed network structures and loss functions, our framework can perform high-quality hair editing in a disentangled manner. Extensive experiments demonstrate the superiority of our approach in terms of manipulation accuracy, visual realism of editing results, and irrelevant attribute preservation. The project repo is https://github.com/wty-ustc/hairclip.
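The shared embedding space makes image and text conditions directly comparable, which suggests a CLIP-based guidance loss. The sketch below shows such a loss using OpenAI's `clip` package; the generator, disentangling losses, and training loop of the full framework are omitted, and this is a sketch of the general CLIP-guidance idea rather than the paper's exact objective:

```python
# A minimal sketch of CLIP-based text guidance: pull the edited image's
# CLIP embedding toward the text condition (assumes OpenAI's `clip`).
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)


def clip_text_loss(edited_image, text):
    """Cosine-distance loss between the edited image and the text prompt
    in CLIP's shared embedding space (images and texts are comparable)."""
    tokens = clip.tokenize([text]).to(device)
    img_feat = model.encode_image(edited_image)   # (1, 512)
    txt_feat = model.encode_text(tokens)          # (1, 512)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    return 1.0 - (img_feat * txt_feat).sum(dim=-1).mean()


# Toy usage: a random 224x224 "edited" image scored against a hair prompt.
fake_edit = torch.randn(1, 3, 224, 224, device=device, dtype=model.dtype)
print(float(clip_text_loss(fake_edit, "purple curly hair")))
```

In an editing framework, minimizing this loss with respect to the generator's latent code steers the hair attributes toward the prompt while other losses preserve irrelevant attributes.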